Artificial intelligence has been transforming industry and academic research across the globe, and research software development is no exception. Machine learning and deep learning are being applied in every aspect of the research software development lifecycle, from new algorithm design paradigms to software development processes. In this paper, we discuss our views on today's challenges and opportunities presented by AI for research software development and engineering, and describe how our approach at the University of Florida is preparing our workforce for the new era of AI.
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants: what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing, leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences, and motivate the development of a shared hyper-spatial modeling language and transaction protocol as a first, and key, step towards such an ecology.
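To make "self-evidencing" concrete, here is a minimal worked equation using standard variational-inference notation (the symbols q, s, o are generic, not taken from the abstract): minimizing variational free energy is equivalent to maximizing a lower bound on log model evidence.

```latex
% Variational free energy F as an upper bound on negative log evidence (surprisal).
% q(s): approximate posterior over hidden states s; p(o, s): generative model over
% observations o and states s. Standard notation, not the white paper's own.
\[
\begin{aligned}
F[q] &= \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \\
     &= \underbrace{D_{\mathrm{KL}}\big[q(s)\,\Vert\,p(s \mid o)\big]}_{\ge\, 0} \;-\; \ln p(o)
     \;\;\ge\;\; -\ln p(o).
\end{aligned}
\]
```

Minimizing F with respect to q drives the KL term toward zero (inference) while implicitly maximizing the model evidence p(o); in active inference the same quantity is also reduced through action, i.e., by sampling the observations the generative model expects.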
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
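For reference, a minimal usage sketch of how an open-access checkpoint like this can be loaded for text generation with the Hugging Face transformers library; the repository id bigscience/bloom, the dtype, and the device settings are assumptions of this sketch, not details from the paper.

```python
# Minimal sketch: loading an open-access causal LM checkpoint with Hugging Face transformers.
# The repository id "bigscience/bloom" and the dtype/device settings are assumptions;
# the full 176B model requires multi-GPU or offloaded inference in practice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom"  # assumed Hub id; a smaller variant may be more practical
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spreads weights across available devices (requires accelerate)
)

prompt = "Translate to French: cheese ->"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```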
Given their centrality to HEP research, data science (DS) and machine learning (ML) play well-established and increasingly relevant roles in high energy physics (HEP). Moreover, exploiting the symmetries inherent in physics data has motivated physics-informed ML as a vibrant sub-field of computer science research. HEP researchers benefit greatly from widely available materials for education, training, and workforce development. They also contribute to these materials and provide software to DS/ML-related fields. Physics departments increasingly offer courses at the intersection of DS, ML, and physics, often using curricula developed by HEP researchers and involving open software and data used in HEP. In this white paper, we explore the synergies between HEP research and DS/ML education, discuss the opportunities and challenges at this intersection, and propose community activities that would be mutually beneficial.
One concern with the rise of large language models is their potential to cause significant harm, particularly from pretraining on biased, obscene, copyrighted, and private information. Emerging ethical approaches attempt to filter pretraining material, but such approaches have been ad hoc and fail to take context into account. We offer an approach to filtering grounded in law, which directly addresses the trade-offs involved in filtering material. First, we gather and make available the Pile of Law, a 256GB (and growing) open-source dataset of English-language legal and administrative data, covering court opinions, contracts, administrative rules, and legislative records. Pretraining on the Pile of Law may help with legal tasks that hold the promise of improving access to justice. Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons for researchers, and discuss how our dataset reflects these norms. Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data, opening an exciting new research direction in model-based processing.
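As a rough illustration of the kind of context-sensitive filtering rule the paper argues can be derived from legal practice, here is a minimal sketch of a pretraining filter; the rules, source labels, and regular expression are hypothetical and are not taken from the paper or its dataset.

```python
# Minimal sketch of a context-aware pretraining filter (hypothetical rules, not the paper's).
# Idea: rather than dropping any document containing a sensitive pattern, apply different
# actions depending on the document's context (e.g., its source type).
import re
from dataclasses import dataclass
from typing import Optional

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy pattern for a US social security number

@dataclass
class Document:
    text: str
    source: str  # e.g. "court_opinion", "contract" -- illustrative labels

def filter_document(doc: Document) -> Optional[Document]:
    """Return a (possibly redacted) document, or None to drop it entirely."""
    if SSN_RE.search(doc.text):
        if doc.source == "court_opinion":
            # Illustrative rule: keep the public document but mask the identifier.
            return Document(SSN_RE.sub("[REDACTED]", doc.text), doc.source)
        # In less clearly public contexts, err on the side of dropping the document.
        return None
    return doc

corpus = [Document("The plaintiff's SSN 123-45-6789 was disclosed...", "court_opinion")]
filtered = [d for d in (filter_document(doc) for doc in corpus) if d is not None]
print(filtered[0].text)
```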
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense Transformer architectures, and Switch-style sparse Transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably typically involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
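Since the benchmark reports calibration alongside accuracy, here is a minimal sketch of one common calibration measure, expected calibration error (ECE), computed from per-example confidences; the binning scheme and toy data are illustrative and are not BIG-bench's own scoring code.

```python
# Minimal sketch: expected calibration error (ECE) over equal-width confidence bins.
# Toy inputs; not the benchmark's official scoring implementation.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # |mean accuracy - mean confidence| in this bin, weighted by the bin's share of samples
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Example: a model that is overconfident on half of its answers.
conf = [0.9, 0.8, 0.95, 0.6, 0.7, 0.85]
hit  = [1,   0,   1,    1,   0,   0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```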
It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location. In this paper, we take this idea to its next level: what would it take to perform deep learning on these functions instead, treating them as data? In this context we refer to the data as functa, and propose a framework for deep learning on functa. This view presents a number of challenges around efficient conversion from data to functa, compact representation of functa, and effectively solving downstream tasks on functa. We outline a recipe to overcome these challenges and apply it to a wide range of data modalities including images, 3D shapes, neural radiance fields (NeRF) and data on manifolds. We demonstrate that this approach has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification. Code: https://github.com/deepmind/functa
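To make the implicit-neural-representation idea concrete, here is a minimal sketch of fitting a small coordinate MLP to a single image so that it maps (x, y) locations to pixel values; the architecture, Fourier features, and training details are generic assumptions and not the paper's actual shared-network-plus-modulations setup.

```python
# Minimal sketch: fit a coordinate MLP f(x, y) -> RGB to one image (a toy implicit
# neural representation). Generic ReLU MLP with random Fourier features; not the
# paper's exact architecture.
import torch
import torch.nn as nn

H, W = 32, 32
image = torch.rand(H, W, 3)  # stand-in for a real image, values in [0, 1]

# Pixel-centre coordinates normalised to [-1, 1], flattened to (H*W, 2).
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
targets = image.reshape(-1, 3)

# Random Fourier features help MLPs represent high-frequency image content.
B = torch.randn(2, 64) * 10.0
def encode(xy):
    proj = xy @ B
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

mlp = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 3))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(500):
    pred = mlp(encode(coords))
    loss = ((pred - targets) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The trained weights of `mlp` now act as a continuous representation of the image:
# it can be queried at arbitrary (x, y) locations, including off-grid points.
print(f"final MSE: {loss.item():.4f}")
```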
This paper presents a data-driven methodology for learning convergent control policies from offline data using contraction theory. Contraction theory enables the construction of a policy that makes the closed-loop system trajectories inherently convergent towards a unique trajectory. At a technical level, identifying the contraction metric, i.e., the distance metric with respect to which a robot's trajectories exhibit contraction, is often non-trivial. We propose to jointly learn the control policy and its corresponding contraction metric while enforcing contraction. To this end, we learn an implicit dynamics model of the robotic system from an offline dataset consisting of the robot's state and input trajectories. Using this learned dynamics model, we propose a data-augmentation algorithm for learning contraction policies. We randomly generate samples in the state space and propagate them forward in time through the learned dynamics model to generate auxiliary sample trajectories. We then learn the control policy and the contraction metric such that the distance between the trajectories from the offline dataset and our generated auxiliary sample trajectories decreases over time. We evaluate the performance of the proposed framework on simulated robotic goal-reaching tasks and demonstrate that enforcing contraction yields faster convergence and greater robustness of the learned policy.
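A rough sketch of the training loop described above, under stated assumptions: the network sizes, the one-step contraction surrogate, and the plain Euclidean distance standing in for a learned contraction metric are all simplifications of this sketch, not the paper's formulation.

```python
# Rough sketch of the described augmentation scheme: roll random states forward through a
# learned dynamics model and penalise any growth, over one step, of the distance to
# reference trajectories from the offline dataset. The Euclidean distance here is a
# placeholder for a jointly learned contraction metric.
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2
dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                         nn.Linear(64, state_dim))            # learned (frozen) dynamics model
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                       nn.Linear(64, action_dim))             # policy being trained
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Offline reference transitions (x_t, x_{t+1}); random stand-ins for real data.
x_ref, x_ref_next = torch.randn(256, state_dim), torch.randn(256, state_dim)

for step in range(1000):
    # Auxiliary samples: random states propagated through the learned dynamics.
    x_aux = torch.randn(256, state_dim)
    x_aux_next = dynamics(torch.cat([x_aux, policy(x_aux)], dim=-1))

    # Contraction surrogate: the next-step distance should shrink relative to the current one.
    d_now = ((x_aux - x_ref) ** 2).sum(dim=-1)
    d_next = ((x_aux_next - x_ref_next) ** 2).sum(dim=-1)
    loss = torch.relu(d_next - 0.95 * d_now).mean()           # enforce d_next <= 0.95 * d_now

    opt.zero_grad(); loss.backward(); opt.step()
```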
To date, few medical image registration approaches have been comprehensively compared on a wide range of complementary, clinically relevant tasks. This limits the adoption of research advances and prevents fair benchmarking of competing methods. Many new learning-based methods have been explored within the past five years, but the questions of which optimisation, architecture, or metric strategy is ideally suited remain open. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MRI), populations (intra- and inter-patient), and levels of supervision. We established a lower entry barrier for training and validation of 3D registration, which helped us compile results from more than 65 individual method submissions by over 20 unique teams. Our complementary set of metrics, including robustness, accuracy, plausibility, and speed, enables a unique understanding of the current state of medical image registration. Further analyses of transferability, bias, and the importance of supervision question the superiority of primarily deep-learning-based approaches and open exciting new research directions towards hybrid methods that leverage GPU-accelerated conventional optimisation.
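For orientation, here is a minimal sketch of two registration quality measures of the kind such benchmarks typically report: Dice overlap between warped and fixed segmentations (accuracy) and the standard deviation of the log Jacobian determinant of a displacement field (a common plausibility/smoothness measure). The toy arrays and implementation details are assumptions of this sketch, not the challenge's official evaluation code.

```python
# Minimal sketch of two common registration metrics on toy 3D data:
# Dice overlap (accuracy) and SD of the log Jacobian determinant (plausibility).
import numpy as np

def dice(seg_a, seg_b, label=1):
    a, b = seg_a == label, seg_b == label
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

def sd_log_jacobian(disp):
    """disp: displacement field of shape (3, D, H, W), in voxels."""
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]  # d u_i / d x_j
    jac = np.stack([np.stack(g, axis=0) for g in grads], axis=0)      # (3, 3, D, H, W)
    jac = jac + np.eye(3)[:, :, None, None, None]                     # identity + displacement gradient
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))           # (D, H, W)
    det = np.clip(det, 1e-6, None)                                    # guard against folding voxels
    return float(np.log(det).std())

fixed_seg = np.zeros((32, 32, 32), int); fixed_seg[8:24, 8:24, 8:24] = 1
warped_seg = np.roll(fixed_seg, 1, axis=0)
disp = 0.1 * np.random.randn(3, 32, 32, 32)
print(f"Dice = {dice(warped_seg, fixed_seg):.3f}, SD(logJ) = {sd_log_jacobian(disp):.3f}")
```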
Arrays of interconnected magnetic nanorings with emergent magnetisation dynamics have recently been proposed for reservoir computing applications, but for them to be computationally useful it must be possible to optimise their dynamic responses. Here, we use a phenomenological model to demonstrate that such reservoirs can be optimised by tuning hyperparameters that control the scaling and input rate of data into the system via rotating magnetic fields. We assess the rings' computational capabilities at each set of these hyperparameters using task-independent metrics, and show how these metrics correlate directly with performance in spoken and written digit recognition tasks. We then show that these metrics can be further improved by expanding the reservoir's outputs to include multiple concurrent measures of the ring arrays' magnetic states.
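For orientation, a generic sketch of how a reservoir's linear readout is typically trained and how an input-scaling hyperparameter can be swept; the echo-state-style update below is a standard stand-in and is not the paper's phenomenological nanoring model.

```python
# Generic reservoir-computing sketch: an echo-state-style reservoir with a closed-form
# ridge-regression readout, swept over an input-scaling hyperparameter. This stands in
# for the paper's nanoring model, which it is not.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 500

u = rng.standard_normal((T, n_in))                       # toy input signal
y_target = np.roll(u[:, 0], 3)                           # toy task: recall the input 3 steps ago

W_res = rng.standard_normal((n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # scale spectral radius below 1

def run_reservoir(input_scaling):
    W_in = input_scaling * rng.standard_normal((n_res, n_in))
    x, states = np.zeros(n_res), []
    for t in range(T):
        x = np.tanh(W_res @ x + W_in @ u[t])             # reservoir state update
        states.append(x.copy())
    return np.array(states)

def ridge_readout_mse(states, targets, alpha=1e-4):
    # Closed-form ridge regression readout: W = (S^T S + a I)^-1 S^T y
    S = states
    W = np.linalg.solve(S.T @ S + alpha * np.eye(S.shape[1]), S.T @ targets)
    return float(np.mean((S @ W - targets) ** 2))

for scale in [0.1, 0.5, 1.0, 2.0]:                       # hyperparameter sweep over input scaling
    mse = ridge_readout_mse(run_reservoir(scale), y_target)
    print(f"input scaling {scale}: readout MSE = {mse:.4f}")
```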